CCBS – a method to maintain memorability, accuracy of password submission and the effective password space in click-based visual passwords
Text passwords are vulnerable to many security attacks, in part because of insecure end-user practices such as
selecting weak passwords that are easier to remember over the long term. Visual password (VP) solutions were therefore
developed to maintain the security and usability of user authentication in collaborative systems. This paper focuses on the
challenges facing click-based visual password systems and proposes a novel method in response to them. For instance,
hotspots reveal a serious vulnerability: they occur because users are attracted to specific parts of an image and neglect
other areas, so image analysis that identifies these high-probability areas can assist dictionary attacks.
Another concern is that click-based systems do not guide users towards the correct click-point they are aiming to
select: users might recall the correct spot or area but still place their click outside the tolerance
distance around the original click-point, which results in more incorrect password submissions.
Moreover, the PassPoints study by Wiedenbeck et al. (2005) examined the long-term retention of their VP in comparison with
text passwords. Despite being a cued-recall scheme, its successful submission rate was no better than that of
text passwords, falling from 85% (instant retention on the day of registration) to 55% after 2 weeks, identical
to the text-password result in the same experiment. The successful submission rates after 6 weeks
were also 55% for both VP and text passwords.
This paper addresses these issues and then presents a novel method (CCBS) as a usable solution supported by
empirical evidence: a user study is conducted and the results are evaluated against a comparative study.
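The tolerance-distance behaviour described above can be sketched as a simple check. This is a minimal illustration, not the CCBS method itself: the circular region and the 19-pixel radius are assumptions for the example (some click-based schemes instead use a square tolerance grid).

```python
import math

def within_tolerance(click, original, tolerance=19):
    """Return True if a submitted click falls inside the tolerance
    region around the originally registered click-point.

    `click` and `original` are (x, y) pixel coordinates; `tolerance`
    is a radius in pixels (an illustrative value, not a CCBS parameter).
    """
    dx = click[0] - original[0]
    dy = click[1] - original[1]
    return math.hypot(dx, dy) <= tolerance

# A click 10 px from the registered point is accepted; 30 px away is rejected.
print(within_tolerance((110, 200), (100, 200)))  # True
print(within_tolerance((130, 200), (100, 200)))  # False
```

A user who remembers the right area but clicks just outside this radius fails authentication, which is exactly the usability problem the paper highlights.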
Responsibility and non-repudiation in resource-constrained Internet of Things scenarios
The proliferation and popularity of smart
autonomous systems necessitates the development
of methods and models for ensuring the effective
identification of their owners and controllers. The aim
of this paper is to critically discuss the responsibility of
Things and their impact on human affairs. This starts
with an in-depth analysis of IoT characteristics such
as autonomy, ubiquity and pervasiveness. We argue
that Things governed by a controller should have an
identifiable relationship between the two parties and
that authentication and non-repudiation are essential
characteristics in all IoT scenarios which require
trustworthy communications. However, resources can
be a constraint: many Things, for instance, are designed
to run on low-powered hardware. Hence, we also
propose a protocol to demonstrate how the authenticity
of participating Things can be achieved in a connectionless
and resource-constrained environment.
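One generic way to authenticate datagrams from a low-powered Thing is a keyed MAC over a counter and payload; this is a sketch under that assumption, not the paper's protocol, and the key, counter width and field names are invented for illustration.

```python
import hmac
import hashlib

def authenticate(shared_key: bytes, counter: int, payload: bytes) -> bytes:
    """Tag a connectionless datagram so the controller can verify
    which Thing sent it. A monotonic counter is included to resist
    replay; HMAC-SHA256 needs only symmetric primitives, which suits
    low-powered hardware.
    """
    msg = counter.to_bytes(4, "big") + payload
    return hmac.new(shared_key, msg, hashlib.sha256).digest()

def verify(shared_key: bytes, counter: int, payload: bytes, tag: bytes) -> bool:
    expected = authenticate(shared_key, counter, payload)
    return hmac.compare_digest(expected, tag)  # constant-time comparison

key = b"per-device-shared-secret"
tag = authenticate(key, 42, b"temp=21.5")
print(verify(key, 42, b"temp=21.5", tag))   # True
print(verify(key, 43, b"temp=21.5", tag))   # False (counter mismatch)
```

Note the trade-off: a shared symmetric key provides authenticity but not non-repudiation, since either party could have produced the tag; the non-repudiation the paper argues for would require asymmetric signatures, at a higher computational cost on constrained hardware.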
Adaptive Traffic Fingerprinting for Darknet Threat Intelligence
Darknet technology such as Tor has been used by various threat actors for
organising illegal activities and data exfiltration. As such, there is a case
for organisations to block such traffic, or to try and identify when it is used
and for what purposes. However, anonymity in cyberspace has always been a
domain of conflicting interests. While it gives nefarious actors enough
cover to mask their illegal activities, it is also a cornerstone that
facilitates freedom of speech and privacy. We present a proof of concept for a
novel algorithm that could form the fundamental pillar of a darknet-capable
Cyber Threat Intelligence platform. The solution can reduce anonymity of users
of Tor, and considers the existing visibility of network traffic before
optionally initiating targeted or widespread BGP interception. In combination
with server HTTP response manipulation, the algorithm attempts to reduce the
candidate data set to eliminate client-side traffic that is most unlikely to be
responsible for server-side connections of interest. Our test results show that
MITM-manipulated server responses lead to the expected changes being received by the Tor
client. Using simulation data generated by Shadow, we show that the detection
scheme is effective, with a false positive rate of 0.001, while the sensitivity in
detecting non-targets was 0.016 ± 0.127. Our algorithm could assist
collaborating organisations willing to share their threat intelligence or
cooperate during investigations.
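The reported error rates come from standard confusion-matrix definitions; as a sketch (with invented counts, not the paper's simulation data):

```python
def rates(tp, fp, tn, fn):
    """Confusion-matrix rates used to judge a detection scheme.

    Returns (false_positive_rate, true_positive_rate).
    """
    fpr = fp / (fp + tn)   # fraction of non-target clients wrongly flagged
    tpr = tp / (tp + fn)   # fraction of genuine targets caught
    return fpr, tpr

# Illustrative numbers only: 1 false alarm across 1000 non-targets
# gives the same 0.001 FPR the evaluation reports.
fpr, tpr = rates(tp=95, fp=1, tn=999, fn=5)
print(round(fpr, 3), round(tpr, 2))  # 0.001 0.95
```

For a candidate-set-reduction scheme like this one, a very low FPR matters most: each false positive keeps an innocent Tor client in the suspect set.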
Arabic text classification methods: Systematic literature review of primary studies
Recent research on Big Data has proposed and evaluated a number of advanced techniques to gain meaningful information from the complex and large volume of data available on the World Wide Web. To achieve accurate text analysis, a process is usually initiated with a Text Classification (TC) method. Reviewing the very recent literature in this area shows that most studies focus on English (and other scripts), while attempts at classifying Arabic text remain relatively limited. Hence, we contribute the first Systematic Literature Review (SLR) on the topic, strictly following a search protocol to summarize key characteristics of the different TC techniques and methods used to classify Arabic text; this work also aims to identify and share scientific evidence of the gap in the current literature, helping to suggest areas for further research. Our SLR explicitly uses empirical evidence as an inclusion criterion, and then concludes which classifiers produced the most accurate results. Further, our findings identify the lack of standardized corpora for Arabic text: authors compile their own, and most of the work focuses on Modern Standard Arabic, with very little done on Colloquial Arabic despite its wide use in social media networks such as Twitter. In total, 1464 papers were surveyed, from which 48 primary studies were included and analyzed
Web browser artefacts in private and portable modes: a forensic investigation
Web browsers are essential tools for accessing the internet. Extra complexities are added to forensic investigations when recovering browsing artefacts, as portable and private browsing are now common and available in popular web browsers. Browsers claim that, while operating in private mode, no data is stored on the system. This paper investigates whether these claims of browser discretion hold by analysing the remnants of browsing left by the latest versions of Internet Explorer, Chrome, Firefox, and Opera when used in a private browsing session, as a portable browser, and when the portable browser is run in private mode. Among our key findings, forensic analysis of the file system recovers evidence from IE while running in private mode, whereas other browsers seem to maintain better user privacy. We analyse volatile memory and demonstrate that physical memory, by means of dump files, hibernate files and page files, is the key area where evidence from all browsers remains recoverable regardless of the mode or location from which they run
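Recovering browsing remnants from physical memory typically means string-carving raw dumps. A minimal sketch of the idea (real forensic tools also handle UTF-16 strings and fragmented pages; the regex and chunking here are simplifying assumptions):

```python
import re

def carve_urls(dump_path, chunk_size=1 << 20):
    """Scan a raw memory, hibernate or page-file dump for URL-like
    ASCII strings, reading the file in chunks so large dumps fit in
    memory. Returns a sorted, de-duplicated list of matches.
    """
    url_re = re.compile(rb"https?://[\x21-\x7e]{4,200}")
    found = set()
    prev_tail = b""
    with open(dump_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            data = prev_tail + chunk
            for m in url_re.finditer(data):
                found.add(m.group().decode("ascii", "replace"))
            prev_tail = data[-220:]  # overlap so a URL split across chunks is seen whole
    return sorted(found)
```

Because such strings survive in page and hibernation files after a private session ends, carving them can recover activity that the browser itself never wrote to disk.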
AdPExT: designing a tool to assess information gleaned from browsers by online advertising platforms
The world of online advertising is directly dependent on collecting data about the online browsing habits of individuals to enable effective advertisement targeting and retargeting. However, these data collection practices can cause leakage of private data belonging to website visitors (end-users) without their knowledge. The growing privacy concern of end-users is amplified by a lack of trust and understanding of what data advertisement trackers are collecting and how they are using it. This paper presents an investigation to restore that trust or validate the concerns. We aim to facilitate the assessment of the actual end-user-related data being collected by advertising platforms (APs) by means of a critical discussion, but also through the development of a new tool, AdPExT (Advertising Parameter Extraction Tool), which can be used to extract third-party parameter key-value pairs at the level of individual key-value pairs. Furthermore, we conduct a survey covering mostly United Kingdom-based frequent internet users to gather the perceived sensitivity sentiment for various representative tracking parameters. End-users have a definite concern with regard to advertisement tracking of sensitive data by globally dominant platforms such as Facebook and Google
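The key-value granularity described above can be illustrated with standard URL parsing; this is a generic sketch, not AdPExT's implementation, and the tracker URL and parameter names below are fabricated for the example.

```python
from urllib.parse import urlsplit, parse_qsl

def extract_tracking_params(request_url: str):
    """Break a third-party advertising request into its individual
    key-value pairs, the granularity at which per-parameter
    sensitivity can then be assessed.
    """
    parts = urlsplit(request_url)
    return {
        "tracker_host": parts.hostname,
        "params": dict(parse_qsl(parts.query)),  # percent-decoding included
    }

req = "https://tracker.example/collect?uid=abc123&page=%2Fcheckout&ref=shop"
info = extract_tracking_params(req)
print(info["tracker_host"])     # tracker.example
print(info["params"]["page"])   # /checkout
```

Surfacing each pair separately is what lets survey respondents rate, say, a user identifier as more sensitive than a referrer value.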
‘The language is disgusting and they refer to my disability’: the cyberharassment of disabled people
Disabled people face hostility and harassment in their sociocultural environment. The use of electronic communications creates an online context that further reshapes this discrimination. We explored the experiences of 19 disabled victims of cyberharassment. Five themes emerged from the study: disability and health consequences, family involvement, misrepresentation of self, perceived complexity, and lack of awareness and expertise. Cyberharassment incidents against disabled people were influenced by pre-existing impairment, perceived hate-targeting, and perpetrators faking disability to get closer to victims online. Our findings highlight a growing issue requiring action and proper support
Reducing False Negatives in Ransomware Detection: A Critical Evaluation of Machine Learning Algorithms
Technological achievement and cybercriminal methodology are growing along parallel paths; protocols such as Tor and i2p (designed to offer confidentiality and anonymity) are being utilised to run ransomware operations under a Ransomware as a Service (RaaS) model. RaaS enables criminals with limited technical ability to launch ransomware attacks. Several recent high-profile cases, such as the Colonial Pipeline and JBS Foods attacks, involved forcing companies to pay enormous ransoms, indicating how difficult it is for organisations to recover from these attacks using traditional means such as restoring backup systems. Hence the benefit of intelligent early ransomware detection and eradication. This study offers a critical review of the literature on how state-of-the-art machine learning (ML) models can be used to detect ransomware. However, the results uncovered a tendency of previous works to report precision while overlooking the importance of other values in the confusion matrix, such as false negatives. Therefore, we also contribute a critical evaluation of ML models, using a dataset of 730 malware and 735 benign samples, to assess their suitability at different stages of a detection system architecture and what that means in terms of cost. For example, the results show that an Artificial Neural Network (ANN) model is the most suitable, as it achieves the highest precision of 98.65%, a Youden's index of 0.94, and a net benefit of 76.27%; however, the Random Forest model (lower precision of 92.73%) offers the benefit of having the lowest false-negative rate (0.00%). The risk of a false negative in this type of system is comparable to the unpredictable but typically large cost of a ransomware infection, versus the more predictable cost of the resources needed to filter false positives
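The metrics the review argues should accompany precision all follow from the confusion matrix; a sketch with invented counts (only the 730-malware / 735-benign totals mirror the dataset, the splits are assumptions):

```python
def evaluate(tp, fp, tn, fn):
    """Report precision alongside the error rates that precision alone hides."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    fnr = fn / (fn + tp)           # missed ransomware: the costly error
    youden_j = sensitivity + specificity - 1
    return precision, fnr, youden_j

# Illustrative counts, not the paper's results:
precision, fnr, j = evaluate(tp=700, fp=10, tn=725, fn=30)
print(round(precision, 3), round(fnr, 3), round(j, 3))  # 0.986 0.041 0.945
```

The point of the comparison is visible here: a model can post a high precision while its false-negative rate, the figure that corresponds to an undetected infection, remains non-trivial.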
Effective methods to detect metamorphic malware: A systematic review
Metamorphic malware routinely rewrites its own code to
remain stealthy and undetected within infected environments. This characteristic is
maintained by means of encryption and decryption methods, and by obfuscation through
garbage-code insertion, code transformation and registry modification, which makes
detection very challenging. The main objective of this study is to contribute an
evidence-based narrative demonstrating the effectiveness of recent proposals. Sixteen
primary studies were included in this analysis based on a pre-defined protocol. The
majority of the reviewed detection methods used Opcode, Control Flow Graph (CFG)
and API Call Graph features. Key challenges facing the detection of metamorphic malware
include code obfuscation, the lack of dynamic capabilities to analyse code, and the
difficulty of applying the methods in practice. Methods were further analysed on the
basis of their approach, limitations, empirical evidence and key parameters such as
dataset, Detection Rate (DR) and False Positive Rate (FPR)
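Opcode-based detectors typically turn a disassembled instruction stream into n-gram counts; a minimal sketch of that feature step (the opcode trace is fabricated, and real methods feed these counts into a classifier):

```python
from collections import Counter

def opcode_ngrams(opcodes, n=2):
    """Count length-n opcode subsequences from a disassembly trace.

    Metamorphic rewriting (garbage insertion, instruction reordering)
    perturbs these counts, which is why the reviewed methods often
    combine them with CFG or API-call-graph features.
    """
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

trace = ["push", "mov", "xor", "mov", "xor", "ret"]
print(opcode_ngrams(trace)[("mov", "xor")])  # 2
```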
Testing Asymmetrical Effect of Exchange Rate on Saudi Service Sector Trade: A Non-Linear ARDL Approach
The present study explores the impact of devaluation and appreciation (negative and positive movements of the exchange rate) on services-sector trade by applying the Non-Linear ARDL (NARDL) technique of Shin et al. (2014). The study confirms asymmetric effects for all sectors in the short run, and the long-run effects may improve the trade balance of all sectors. Overall, devaluation confirms the existence of a J-curve effect after some lag. The study also finds that appreciation of the Saudi currency may have adverse effects on the trade balance in all services sectors except the travel, construction and tourism sectors. An increase in world income may help enhance Saudi exports, while a rise in Saudi Arabia's income may have negative effects on the trade balance of all services sectors except travel and tourism.
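The asymmetry in the NARDL approach of Shin et al. (2014) comes from decomposing exchange-rate changes into partial sums of positive and negative movements; in standard notation (a sketch, writing $e_t$ for the log exchange rate):

$$
e_t^{+} = \sum_{j=1}^{t} \max(\Delta e_j, 0), \qquad
e_t^{-} = \sum_{j=1}^{t} \min(\Delta e_j, 0)
$$

Entering $e_t^{+}$ and $e_t^{-}$ as separate regressors is what allows appreciation and devaluation to carry different short-run and long-run coefficients in each sector's trade-balance equation.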
Keywords: Devaluation, Trade, Asymmetry
JEL Classifications: F31, F14, D82